
Human-Centered AI

Tags: #technology #ai #ethics #design #user-experience #society #future

Author: Ben Shneiderman

Overview

In “Human-Centered AI,” I argue for a fundamental shift in the way we design and develop artificial intelligence (AI). The current focus on algorithm optimization and machine autonomy has led to impressive technical achievements, but also to significant concerns about reliability, safety, bias, and the potential for job displacement and societal harm. I propose a new paradigm – Human-Centered AI (HCAI) – that prioritizes human control, values, and well-being. This approach seeks to build supertools that amplify human abilities and enhance human performance, rather than replacing humans with machines.

The book is intended for a broad audience, including AI researchers, developers, business leaders, policy-makers, and the general public. It aims to provoke a conversation about the future of AI, urging stakeholders to consider the societal and ethical implications of these technologies and to advocate for a more human-centered approach. I believe that HCAI offers a path to a more positive future, one where technology empowers people to live healthier, more fulfilling, and more equitable lives.

The book explores a range of topics, from the philosophical foundations of rationalism and empiricism to the practical challenges of designing reliable, safe, and trustworthy AI systems. I introduce a two-dimensional framework that separates levels of automation from levels of human control, allowing designers to find the optimal balance for each application. I also discuss design metaphors that can guide HCAI development, such as the concept of supertools and control centers. Finally, I outline concrete recommendations for governance structures, including software engineering practices, safety culture management, independent oversight, and government regulation, to ensure responsible AI development.

Throughout the book, I draw on real-world examples, including successes and failures of AI systems, to illustrate the key concepts and challenges. I highlight the importance of user experience design, data visualization, and ethical considerations in creating HCAI systems. While acknowledging the anxieties surrounding AI, I maintain an optimistic view, believing that a human-centered approach can help us harness the transformative power of AI for good.

Book Outline

1. Introduction: High Expectations

Technology should be designed to empower people rather than replace them. Artificial intelligence (AI) research has primarily focused on algorithm performance and machine autonomy, but a new synthesis is emerging: Human-Centered AI (HCAI). HCAI places equal emphasis on human users and other stakeholders, values meaningful human control, and designs systems that align with human values such as self-efficacy, creativity, responsibility, and social connections.

Key concept: This book proposes a new synthesis in which AI-based intelligent algorithms are combined with human-centered thinking to make HCAI. This approach will increase the chance that technology will empower rather than replace people.

2. How Do Rationalism and Empiricism Provide Sound Foundations?

There’s a long-standing debate between rationalism and empiricism that continues to shape AI. Rationalism emphasizes logic, rules, and well-defined boundaries, often leading to a focus on algorithm optimization and automation. Empiricism, on the other hand, stresses real-world observation, acknowledging the complexity, uncertainty, and evolving nature of human contexts and needs. Both philosophies have value, and HCAI seeks to blend them.

Key concept: Rationalists believe in logical thinking… They have confidence in the perfectibility of rules and the strength of formal methods of logic and mathematical proofs. They assume the constancy of well-defined boundaries — like hot and cold, wet and dry… Empiricists believe that researchers must get out of their offices and labs to sense the real world in all its contextual complexity, diversity, and uncertainty.

3. Are People and Computers in the Same Category?

Humans and computers are fundamentally different categories. While AI often aims to replicate human intelligence, HCAI recognizes the unique creativity and capabilities of humans. Instead of trying to replace humans with machines, we should focus on building supertools that amplify and enhance human performance.

Key concept: Blurring the boundaries between people and computers diminishes appreciation of the richness, diversity, and creativity of people… Making a robot that simulates what a human does has value, but I’m more attracted to making supertools that dramatically amplify human abilities by a hundred- or thousand-fold.

4. Will Automation, AI, and Robots Lead to Widespread Unemployment?

Contrary to popular anxieties, automation does not necessarily lead to widespread unemployment. Historically, automation has often led to increased employment by creating new markets, expanding production, and generating demand for new skills. While some jobs are inevitably displaced, a human-centered approach to technology development can help to distribute the benefits of automation more equitably and ensure opportunities for workers.

Key concept: Automation eliminates certain jobs, as it has for hundreds of years… However, automation usually lowers costs and increases quality, leading to vastly expanded demand, which triggers expanded production to serve growing markets, bringing benefits to many people.

5. Summary and Skeptic’s Corner

Human-Centered AI (HCAI) is a new synthesis that combines AI algorithms with a strong focus on human needs and values. This approach utilizes user-centered design methods, including user observation, stakeholder engagement, and iterative refinement, to create systems that amplify, augment, empower, and enhance human performance, while prioritizing human control over technology.

Key concept: HCAI is based on processes that extend user-experience design methods of user observation, stakeholder engagement, usability testing, iterative refinement, and continuing evaluation of human performance in the use of systems that employ AI algorithms such as machine learning.

6. Introduction: Rising above the Levels of Automation

A key concept in HCAI is a two-dimensional framework that moves beyond the traditional linear view of automation. Instead of assuming that more automation necessarily means less human control, this framework considers levels of automation separately from levels of human control. This allows designers to find the optimal balance for each application, ensuring human oversight and intervention where needed, while leveraging the benefits of automation for efficiency and reliability.

Key concept: This chapter opens up new possibilities by way of a two-dimensional framework of human-centered artificial intelligence (HCAI) that separates levels of automation/autonomy from levels of human control. The new guideline is to seek both high levels of human control and high levels of automation…

7. Defining Reliable, Safe, and Trustworthy Systems

Successful HCAI requires a multi-faceted approach to ensure that systems are not only functional, but also reliable, safe, and trustworthy. Reliable systems consistently deliver expected results, while safe systems minimize risks and prioritize human well-being. Building a culture of safety requires strong leadership commitment, careful hiring and training, and a robust system for reporting and learning from failures. Trustworthiness often relies on independent oversight and certification from external organizations.

Key concept: Reliable systems produce expected responses when needed… Cultures of safety are created by managers who focus on… Leadership commitment to safety, Hiring and training oriented to safety, Extensive reporting of failures and near misses, Internal review boards for problems and future plans, Alignment with industry standard practices… Trustworthy systems are discussed more frequently than ever…

8. Two-Dimensional HCAI Framework

The two-dimensional HCAI framework can be visualized as a graph with human control on one axis and computer automation on the other. This framework clarifies when full human control or full computer control is necessary, and when a combination of both is desirable. Simple, predictable tasks can often be heavily automated with minimal human intervention, while complex, uncertain tasks require more human oversight. Life-critical systems that require rapid action often fall into the high automation, low human control quadrant, necessitating rigorous design, testing, and monitoring.

Key concept: The desired goal is often, but not always, to create designs that are in the upper right quadrant… The lower right quadrant is home to relatively mature, well-understood systems for predictable tasks, for example, automobile automatic transmission or skid control on normal highways… For poorly understood and complex tasks with varying contexts of use, the upper right quadrant is needed… The lower right quadrant… is the home of computer autonomy requiring rapid action, for example, airbag deployment, anti-lock brakes, pacemakers, implantable defibrillators, or defensive weapons systems.
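To make the two axes concrete, here is a minimal sketch (not from the book) that places example systems on Shneiderman’s framework. The 0–1 scores, the 0.5 thresholds, and the example ratings are invented for illustration; only the quadrant interpretations come from the chapter.

```python
# Illustrative sketch of the two-dimensional HCAI framework:
# x-axis = level of computer automation, y-axis = level of human control.
# Scores and thresholds below are assumptions made for demonstration.

def quadrant(automation: float, human_control: float) -> str:
    """Map 0-1 scores on each axis to one of the four quadrants."""
    horiz = "right" if automation >= 0.5 else "left"
    vert = "upper" if human_control >= 0.5 else "lower"
    return f"{vert} {horiz}"

examples = {
    "airbag deployment":         (0.9, 0.1),  # rapid autonomous action
    "automatic transmission":    (0.8, 0.3),  # mature, predictable task
    "clinical decision support": (0.8, 0.9),  # desired goal: high on both axes
    "piano":                     (0.1, 0.9),  # human mastery, little automation
}

for name, (auto, control) in examples.items():
    print(f"{name}: {quadrant(auto, control)} quadrant")
```

The point of the sketch is the decoupling: moving right (more automation) does not force a system downward (less human control), which is exactly the design space the chapter opens up.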

9. Design Guidelines and Examples

Human-centered design principles, such as the “Eight Golden Rules” of user interface design, can guide developers in creating effective HCAI systems. These rules emphasize consistency, user control, clear feedback, error prevention, and minimizing cognitive load. Implementing these rules alongside AI algorithms can help create more usable, trustworthy, and beneficial technologies.

Key concept: Table 9.1 Eight Golden Rules for design:

  1. Strive for consistency
  2. Seek universal usability
  3. Offer informative feedback
  4. Design dialogs to yield closure
  5. Prevent errors
  6. Permit easy reversal of actions
  7. Keep users in control
  8. Reduce short-term memory load
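Several of the Golden Rules translate directly into code. As one hypothetical example (not from the book), rule 6, “permit easy reversal of actions,” is commonly implemented with an undo stack:

```python
# Minimal undo-stack sketch illustrating Golden Rule 6,
# "permit easy reversal of actions" (hypothetical example).

class UndoableEditor:
    def __init__(self) -> None:
        self.text = ""
        self._history: list[str] = []  # snapshots saved for undo

    def apply(self, new_text: str) -> None:
        self._history.append(self.text)  # save state before changing it
        self.text = new_text

    def undo(self) -> None:
        if self._history:                # reversal is always safe to request
            self.text = self._history.pop()

ed = UndoableEditor()
ed.apply("draft one")
ed.apply("draft two")
ed.undo()
print(ed.text)  # "draft one"
```

Because every action is reversible, users can explore the interface without fear, which also supports rule 5 (prevent errors) and rule 7 (keep users in control).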

10. Summary and Skeptic’s Corner

The HCAI framework is a paradigm shift that moves beyond the traditional view of AI as seeking to replace humans. By separating levels of human control from levels of automation, it allows for the design of systems that empower users while leveraging the benefits of AI. Good design, guided by concerns about reliability, safety, and trustworthiness, is essential to achieve this balance.

Key concept: The HCAI framework separates the issue of human control from computer automation, making it clear that high levels of human control and high levels of automation can be achieved by good design.

11. Introduction: What are the Goals of AI Research?

AI research is driven by two main goals. The science goal seeks to understand human intelligence and replicate it in machines, focusing on developing computational agents that can perform tasks as well as or better than humans. The innovation goal, on the other hand, seeks to create technologies that amplify and enhance human abilities, empowering people to do more and be more creative.

Key concept: Science goal: … study “computational agents that act intelligently.”… Innovation goal: … develop computers that amplify human abilities so people can do the job themselves.

12. Science and Innovation Goals

The science goal in AI often leads to the development of “intelligent agents” – computers designed to think and act autonomously, potentially replacing humans in certain tasks. However, this approach can create user distrust and anxiety. The innovation goal, by contrast, emphasizes the creation of “supertools” – technologies that augment and empower human capabilities. This human-centered approach is more likely to be accepted and widely adopted.

Key concept: Those who pursue the science goal build cognitive computers that they describe as smart, intelligent, knowledgeable, and capable of thinking… The innovation goal community believes that computers are best designed to be supertools that amplify, augment, empower, and enhance humans.

13. Intelligent Agents and Supertools

The concept of supertools – technologies that amplify human abilities – has a rich history in computer science, dating back to pioneers like Douglas Engelbart. Engelbart’s work focused on augmenting human intellect through tools that enhanced collaboration, communication, and information processing.

Key concept: The supertool community included early HCAI researchers, such as Douglas Engelbart, whose vision of what it meant to augment human intellect was shown in his famed demonstration at the 1968 Fall Joint Computer Conference.

14. Teammates and Tele-bots

The metaphor of computers as teammates can be misleading. While it might seem appealing to have computers that act like human collaborators, this can lead to unrealistic expectations, confusion about responsibility, and reduced human control. A more accurate metaphor is that of “tele-bots” – tools that extend human capabilities under human control.

Key concept: My objection is that human teammates, partners, and collaborators are very different from computers. Instead of these terms, I prefer to use tele-bots to suggest human-controlled devices.

15. Assured Autonomy and Control Centers

The concept of “control centers” offers a more appropriate way to think about human interaction with highly automated systems. Instead of seeking full computer autonomy, control centers emphasize human oversight and intervention. Computers handle predictable tasks and low-level actions, while humans retain control over high-level goals and decisions.

Key concept: The control center metaphor suggests human decision-making for setting goals, supported by computers carrying out predictable tasks with low-level physical actions guided by sensors and carried out by effectors.

16. Social Robots and Active Appliances

While the idea of human-like social robots has long captured the imagination, the reality is that simpler, task-oriented “active appliances” have been far more successful. These appliances, like dishwashers, ovens, and thermostats, leverage automation for specific tasks, while retaining user control over key functions and settings. This approach can be extended to more complex domains, offering a promising path for HCAI development.

Key concept: The contrast [to social robots] is with widely used appliances, such as kitchen stoves, dishwashers, and coffee makers… I call the more ambitious designs active appliances because they have sensors, programmable actions, mobility, and diverse effectors.

17. Summary and Skeptic’s Corner

Successful HCAI designs leverage the unique capabilities of computers, such as powerful algorithms, vast data storage, and advanced sensors, to create supertools that amplify human abilities. Focusing on these distinctive capabilities, rather than trying to replicate human behavior, is more likely to lead to innovative and effective solutions.

Key concept: Designers who harness the distinctive features of computers, such as sophisticated algorithms, huge databases, superhuman sensors, information-abundant displays, and powerful effectors may produce more effective tele-bots that are appreciated by users as supertools.

18. Introduction: How to Bridge the Gap from Ethics to Practice

Moving from ethical principles to practical implementation requires robust governance structures. A four-layer model can guide these efforts. At the team level, software engineering practices should ensure reliability. At the organizational level, a strong safety culture is essential. Industry-specific independent oversight can provide trustworthy certification. Finally, government regulation can provide a framework for responsible AI development and deployment.

Key concept: Figure 18.2 Governance Structures for human-centered AI: The four levels are shown as nested ovals: (1) Team: reliable systems based on software engineering (SE) practices, (2) Organization: a well-developed safety culture based on sound management strategies, (3) Industry: trustworthy certification by external review, and (4) Government regulation.

19. Reliable Systems Based on Sound Software Engineering Practices

Reliable HCAI systems are built on sound software engineering practices. A crucial aspect is the use of audit trails to track system actions and enable retrospective analysis of failures. Incident databases, which collect information about publicly reported incidents, provide valuable data for improving system design, training, and operational practices.

Key concept: An important extension of audit trails are incident databases that capture records of publicly reported incidents in aviation, medicine, transportation, and cybersecurity.
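A minimal sketch of the audit-trail idea (illustrative only; the field names and module names are assumptions, not the book’s): an append-only log of system actions that supports retrospective analysis of failures.

```python
# Illustrative audit-trail sketch: append-only records of system actions,
# queryable after the fact for failures. Field names are assumptions.

import time

class AuditTrail:
    def __init__(self) -> None:
        self._records: list[dict] = []   # append-only in this sketch

    def record(self, actor: str, action: str, outcome: str) -> None:
        self._records.append({
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "outcome": outcome,
        })

    def incidents(self) -> list[dict]:
        """Retrospective query: pull out failures for later analysis."""
        return [r for r in self._records if r["outcome"] == "failure"]

trail = AuditTrail()
trail.record("lane-keeping module", "steering correction", "success")
trail.record("braking module", "emergency stop", "failure")
print(trail.incidents()[0]["action"])
```

An incident database is, in effect, this same structure aggregated across organizations: publicly reported failure records that others can mine to improve design, training, and operations.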

20. Safety Culture through Business Management Strategies

Building a safety culture within organizations is crucial for HCAI development. This requires strong leadership commitment to safety, rigorous hiring and training practices, and a culture of openness in reporting failures and near misses. Analyzing these incidents helps identify areas for improvement and fosters a proactive approach to preventing future problems.

Key concept: Safety-oriented organizations regularly report on their failures (sometimes referred to as adverse events) and near misses (sometimes referred to as “close calls”).

21. Trustworthy Certification by Independent Oversight

Independent oversight provides a vital layer of accountability for HCAI systems. Organizations like auditing firms, insurance companies, and consumer advocacy groups can assess and certify the trustworthiness of AI systems, providing assurance to the public and helping to build trust in these technologies.

Key concept: The key to independent oversight is to support the legal, moral, and ethical principles of human or organizational responsibility and liability for their products and services.

22. Government Interventions and Regulations

Government intervention and regulation can play a vital role in ensuring responsible AI development. While some fear that regulation will stifle innovation, well-designed policies can actually accelerate progress by setting clear standards, promoting safety, and building public trust.

Key concept: While there is understandable concern from major technology companies that government regulation will limit innovation, well-designed regulation can accelerate innovation as it has done for automobile safety and fuel efficiency.

23. Summary and Skeptic’s Corner

The shift to HCAI requires a change in mindset from a focus on algorithm optimization to a focus on human needs and values. This can be a challenge for those accustomed to traditional AI approaches, but a human-centered approach is essential to create technologies that are not only effective, but also responsible, ethical, and trustworthy.

Key concept: The inclusion of human-centered thinking will be difficult for those who have long seen algorithms as the dominant goal. They will question the validity of this new synthesis, but human-centered thinking and practices put AI algorithms and systems to work for commercially successful products and services.

24. Introduction: Driving HCAI forward

There are many exciting research and development opportunities in HCAI, ranging from technical improvements in AI algorithms to design innovations that enhance user control and understanding. Collaboration between researchers, developers, business leaders, and policy-makers will be essential to advance the field and create a more human-centered future for AI.

Key concept: These many goals at different levels of specificity leave room for wide participation and many opportunities to contribute.

25. Assessing Trustworthiness

Assessing the attributes of HCAI systems, such as trustworthiness, fairness, and explainability, is a complex challenge. Developing clear definitions, reliable measurement methods, and appropriate assessment processes will require ongoing research and collaboration among stakeholders.

Key concept: Table 25.1 Frequently mentioned attributes of HCAI systems organized into five categories:

  • General virtues of the system itself: Trustworthy, Responsible/humane, Ethical design, Ethical data, Ethical use, Well-being/benevolence, Secure, Private
  • Performs well in practice: Robust/agile, Reliable/dependable, Available, Resilient/adaptive, Testable/verifiable/validatable/certifiable, Safe
  • Clarity to stakeholders: Accurate, Fair/unbiased, Accountable/liable, Transparent, Interpretable/explainable/intelligible/explicable, Usable
  • Enables independent oversight: Auditable, Trackable, Traceable, Redressable, Insurable, Recorded, Open, Certifiable
  • Complies with accepted practices: Compliant with standards, Compliant with accepted software engineering workflows

26. Caring for and Learning from Our Older Adults

HCAI can be applied to address the diverse needs of older adults. Examples include mobility assistance, personalized healthcare recommendations, medication management, and social connection tools. Designing for this population requires a deep understanding of their needs, preferences, and capabilities, as well as consideration for ethical issues such as privacy and autonomy.

Key concept: Table 26.1 Informal list of older adult needs and examples of tasks with devices that serve those needs:

  • Mobility
  • Food preparation, cooking, and cleaning
  • Personal care
  • Medical care
  • Medical monitoring
  • Wellness and emotional support
  • Shopping
  • Finances
  • Information and education
  • News and entertainment
  • Communication and human connection
  • Security
  • House cleaning
  • House maintenance
  • Gardening and pet care
  • Mentoring
  • Contributing
  • Creative projects

27. Summary and Skeptic’s Corner

The future of HCAI depends on a shift in mindset from a technology-centric to a human-centered approach. This means prioritizing human values and societal benefits, and ensuring that technology development is driven by a desire to empower people, enrich communities, and inspire hope. By focusing on reliability, safety, trustworthiness, and ethical considerations, we can create a more positive and beneficial future for AI.

Key concept: The design aspirations are for reliable, safe, and trustworthy systems that support progress in racial justice, greater income equality, and environmental preservation.

Essential Questions

1. What is Human-Centered AI (HCAI)?

Human-Centered AI (HCAI) is a new approach to AI development that prioritizes human control, values, and well-being. It shifts the focus from optimizing algorithms for autonomous performance to creating systems that amplify and enhance human capabilities while ensuring meaningful human oversight. HCAI emphasizes user experience design, stakeholder engagement, and the importance of designing for reliability, safety, and trustworthiness. This approach aims to create technologies that empower people and improve society, rather than replacing humans with machines.

2. How does HCAI rethink the traditional approach to automation?

The traditional linear view of automation assumes a trade-off: more automation means less human control. HCAI challenges this assumption with a two-dimensional framework that considers levels of automation separately from levels of human control. This allows designers to choose the optimal balance for each application. Simple, predictable tasks can be heavily automated, while complex, uncertain, or life-critical systems require more human control and oversight. This approach ensures that humans remain in the loop, making critical decisions and guiding the technology towards desired outcomes.

3. Why does HCAI reject the metaphors of computers as ‘intelligent agents’ or ‘teammates’?

The common metaphors of “intelligent agents” and “teammates” used to describe AI can be misleading and ultimately harmful. These metaphors suggest that computers are capable of independent thought, collaboration, and even moral responsibility. This can lead to unrealistic expectations, confusion about liability, and a reduction in human control. HCAI favors metaphors like “supertools” and “tele-bots,” which emphasize human control and the role of technology as a tool for augmenting human abilities.

4. How can HCAI be used to address societal challenges and improve lives?

HCAI can be applied to address a wide range of societal challenges, including improving healthcare, promoting citizen science, fighting misinformation, and developing new treatments for diseases. By designing systems that are reliable, safe, trustworthy, and easily understandable, HCAI can empower individuals and communities, foster collaboration, and promote positive social change. The key is to leverage the power of AI algorithms while ensuring human control, ethical considerations, and a focus on user needs.

5. What are the key challenges in evaluating HCAI systems, and how can we assess their trustworthiness?

Assessing attributes like trustworthiness, fairness, and explainability is a significant challenge. Objective measurement is often impossible, requiring new methods and metrics for evaluating HCAI systems. This includes rigorous testing of algorithms and training data, evaluation of real-world performance, and consideration of user experience and stakeholder feedback. Ultimately, building trust in HCAI relies on transparency, accountability, and robust governance structures to ensure responsible development and deployment.

Key Takeaways

1. HCAI emphasizes transparency and explainability.

The goal is not to blindly trust AI systems but to understand how they work and be able to challenge their outcomes when necessary. Transparency, explainability, and user-friendly interfaces that provide insights into the system’s decision-making process are crucial for building trust and ensuring responsible use.

Practical Application:

When designing a medical diagnosis system, the system should give clear explanations for its diagnoses, allowing doctors to understand the reasoning behind the AI’s suggestions and decide whether to accept or challenge them. This ensures that the doctor remains in control and responsible for the final diagnosis and treatment plan.

2. HCAI prioritizes human control over full autonomy.

HCAI recognizes that full computer autonomy is often not desirable and can even be dangerous. Instead, the focus should be on designing systems that augment and empower human capabilities while preserving human control over critical functions.

Practical Application:

An autonomous vehicle should not be designed as a completely self-driving car but as a “safety-first” car that assists the driver and intervenes only when necessary to prevent accidents. This keeps the driver engaged and responsible for the overall driving task, while leveraging AI for tasks like lane-keeping and collision avoidance.

3. HCAI requires stakeholder engagement and a deep understanding of the context of use.

Understanding the context of use and the needs of all stakeholders is essential for designing successful HCAI systems. Engaging with users, domain experts, and other stakeholders throughout the design process leads to more relevant, usable, and trustworthy systems.

Practical Application:

When developing a new financial trading system, designers should engage with traders, risk managers, and regulators throughout the process to understand their needs, concerns, and workflows. This collaborative approach will help create a system that is not only effective but also aligns with ethical considerations and regulatory requirements.

4. HCAI addresses the issue of bias in AI systems.

HCAI acknowledges the potential for bias in AI systems and advocates for rigorous testing and mitigation strategies. This includes carefully curating and evaluating training data, implementing fairness-enhancing interventions, and conducting ongoing monitoring to identify and address any emerging biases.

Practical Application:

Members of a team developing a facial recognition system should be trained to recognize the potential biases in training data and the consequences of inaccurate or unfair outcomes. A dedicated bias testing leader can ensure that the system is evaluated for fairness across different demographics and use cases.

5. HCAI favors active appliances over human-like robots.

The desire to create human-like robots often leads to unrealistic expectations and limited success. HCAI encourages designers to think beyond anthropomorphic forms and focus on developing task-oriented, user-friendly solutions that address specific human needs and enhance human abilities.

Practical Application:

Instead of building a humanoid robot to care for older adults, HCAI suggests focusing on specific needs and designing active appliances that empower older adults to maintain their independence and engage in meaningful activities. This could include a smart medication dispenser, a personalized exercise coach, or a communication tool that connects older adults with their families and communities.

Suggested Deep Dive

Chapter: Two-Dimensional HCAI Framework (Chapter 8)

This chapter provides a practical tool for visualizing and understanding the complex relationship between human control and computer automation in different applications. It’s a valuable resource for AI product engineers as it offers a framework for making design decisions that balance the benefits of automation with the need for human oversight.

Memorable Quotes

Preface

Human-Centered AI offers fresh thinking for designers to imagine new strategies that support human self-efficacy, creativity, responsibility, and social connections.

What is Human-Centered Artificial Intelligence?

Reframing established beliefs with a fresh vision is among the most powerful tools for change.

Are People and Computers in the Same Category?

Making a robot that simulates what a human does has value, but I’m more attracted to making supertools that dramatically amplify human abilities by a hundred- or thousand-fold.

Two-Dimensional HCAI Framework

“Imperceptible AI is not ethical AI.”

Introduction: What are the Goals of AI Research?

Steve Jobs famously described a computer as “a bicycle for our minds,” clearly conveying that computers were best designed to amplify human abilities, while preserving human control.

Comparative Analysis

While Shneiderman’s “Human-Centered AI” echoes some of the concerns raised by other prominent voices in AI ethics, such as Cathy O’Neil’s “Weapons of Math Destruction” and Nick Bostrom’s “Superintelligence,” it offers a distinctly more optimistic and pragmatic approach. Unlike those cautionary tales, Shneiderman doesn’t focus on the potential dangers of AI, but instead presents a framework for responsible design and development that can mitigate risks and maximize benefits. He agrees with Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach” on the transformative potential of AI, but argues for a stronger emphasis on human values and user experience. Shneiderman’s focus on supertools and control centers, rather than fully autonomous systems, aligns with the ideas presented in books like “Human + Machine” by Paul R. Daugherty and H. James Wilson, which emphasizes the importance of human-AI collaboration. “Human-Centered AI” stands out for its practical, actionable recommendations for governance structures, its clear articulation of design principles, and its emphasis on the positive societal impact that HCAI can achieve.

Reflection

Shneiderman’s “Human-Centered AI” makes a compelling case for a paradigm shift in AI development, but its optimism might not fully address the complex realities of the field. While he rightly highlights the successes of human-centered approaches in areas like digital cameras and navigation systems, the application of these principles to more complex and less predictable domains like self-driving cars or social media moderation poses significant challenges. The book’s emphasis on trustworthiness and the importance of human control is laudable, but the practical implementation of these concepts, especially in the face of commercial pressures and the allure of full automation, requires further exploration. Additionally, the book might benefit from a more nuanced discussion of the social and political implications of HCAI, particularly concerning issues like power dynamics, algorithmic bias, and the potential for unintended consequences. Despite these limitations, “Human-Centered AI” is a valuable contribution to the growing field of AI ethics. Its focus on human values, design principles, and governance structures provides a roadmap for developing technologies that empower people and benefit society.

Flashcards

What is Human-Centered AI (HCAI)?

Human-Centered AI is a new synthesis that combines AI algorithms with human-centered design thinking to create systems that amplify and enhance human capabilities while ensuring human control and aligning with human values.

What are the two main goals of AI research?

The science goal aims to understand and replicate human intelligence in machines, while the innovation goal focuses on creating technologies that amplify and enhance human capabilities.

What are the characteristics of HCAI systems?

HCAI systems are designed to be supertools that empower users and enhance human performance. They emphasize human control, transparency, and the ability to understand and intervene in the system’s decision-making process.

What are the key attributes of reliable, safe, and trustworthy HCAI systems?

Reliability refers to a system’s ability to consistently produce expected results, while safety prioritizes human well-being and minimizes risks. Trustworthy systems are those that deserve the trust stakeholders place in them, often confirmed through independent oversight and certification.

How does the HCAI framework approach automation?

The two-dimensional HCAI framework considers levels of automation separately from levels of human control, allowing for nuanced design choices that balance automation with human oversight. The goal is not always full automation, but finding the optimal balance for each application.

Why are metaphors like ‘teammates’ and ‘intelligent agents’ problematic for HCAI?

The metaphor of AI as teammates can lead to unrealistic expectations and reduce human control. HCAI favors metaphors like “supertools” and “tele-bots,” which emphasize technology as a tool that extends human capabilities under human control.

What are the differences between explainable user interfaces and prospective design?

Explainable user interfaces help users understand how the AI system made a decision, while prospective design aims to prevent the need for explanations by giving users more control and transparency.

What are the characteristics of a safety culture in HCAI development?

A safety culture prioritizes safety through strong leadership commitment, continuous monitoring of failures and near misses, and open communication about risks and potential problems.

What are the roles of independent oversight in HCAI?

Auditing firms, insurance companies, consumer advocacy groups, and professional organizations can provide independent oversight of HCAI systems, helping to ensure their trustworthiness and build public trust.

What is the role of government regulation in HCAI?

Government regulation can play a crucial role in setting standards, promoting safety, and protecting public interest in HCAI, but should be carefully designed to avoid stifling innovation.